Water from Two Rocks: Maximizing the Mutual Information

Authors

  • Yuqing Kong
  • Grant Schoenebeck
Abstract

Our goal is to forecast ground truth Y using two sources of information, X_A and X_B, without access to any data labeled with ground truth. That is, we aim to learn two predictors/hypotheses P*_A, P*_B such that P*_A(X_A) and P*_B(X_B) provide high quality forecasts for ground truth Y, without labeled data. We also want to elicit a high quality forecast for Y from the crowd and pay the crowd immediately, without access to Y. We build a natural connection between the learning question and the mechanism design question and handle both with the same information-theoretic approach.

Learning: Under a natural assumption, that X_A and X_B are independent conditioning on Y, we reduce the learning question to an optimization problem max_{P_A, P_B} MIG(P_A, P_B), such that solving the learning question is equivalent to picking the P*_A, P*_B that maximize MIG(P_A, P_B), the f-mutual information gain between P_A and P_B. We also design another family of optimization goals, PS-gain, based on the family of proper scoring rules, and likewise reduce the learning problem to the PS-gain optimization problem. We show that a special case of the PS-gain, picking PS as the logarithmic scoring rule LSR, corresponds to the maximum likelihood estimator method; however, the range of applications of PS-gain is more limited than that of f-mutual information gain. Moreover, we apply our results to the "learning with noisy labels" problem to learn a predictor that forecasts the ground truth label rather than the noisy label, given some side information, without pre-estimating the relationship between the ground truth labels and the noisy labels.

Mechanism design: Assuming the agents' information is independent conditioning on Y, we design mechanisms that elicit high quality forecasts without verification and reward the agents instantly. In the single-task setting, we propose a forecast elicitation mechanism in which truth-telling is a strict equilibrium; in the multi-task setting, we propose a family of forecast elicitation mechanisms in which truth-telling is a strict equilibrium and pays better than any other equilibrium.
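To make the learning reduction concrete, here is a minimal sketch of co-training two predictors by maximizing an empirical, KL-style mutual information gain: forecasts on matched pairs (x_A, x_B) should agree more, relative to a prior over Y, than forecasts on shuffled (independent) pairs. This is the KL member of the f-gain family written in a variational matched-minus-shuffled form; the network shapes, the uniform prior, the synthetic data, and the helper names (make_predictor, agreement, mig_kl) are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch (PyTorch assumed): co-train two predictors P_A, P_B by
# maximizing a KL-style mutual information gain between their forecasts.
# Hypothetical setup: x_A, x_B are two views of the same sample; Y has C classes.
import torch
import torch.nn as nn

C = 3  # number of ground-truth classes (illustrative)

def make_predictor(d_in):
    # Small softmax predictor mapping one view to a forecast over Y.
    return nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(),
                         nn.Linear(32, C), nn.Softmax(dim=-1))

P_A, P_B = make_predictor(5), make_predictor(5)

def agreement(p_a, p_b, prior):
    # K(p_a, p_b) = sum_y p_a(y) p_b(y) / prior(y): how strongly the two
    # forecasts agree, measured relative to the prior over Y.
    return (p_a * p_b / prior).sum(dim=-1)

def mig_kl(x_a, x_b, prior):
    # Empirical KL-gain: matched pairs should score higher than shuffled
    # (independent) pairs; maximizing this pushes the two forecasts to
    # share exactly the information the views carry about Y.
    p_a, p_b = P_A(x_a), P_B(x_b)
    matched = torch.log(agreement(p_a, p_b, prior)).mean()
    perm = torch.randperm(x_b.shape[0])
    shuffled = agreement(p_a, p_b[perm], prior).mean()
    return matched - shuffled + 1.0  # +1 is the constant from the KL conjugate

opt = torch.optim.Adam(list(P_A.parameters()) + list(P_B.parameters()), lr=1e-3)
prior = torch.full((C,), 1.0 / C)  # assumed uniform prior over Y

for step in range(1000):
    # Stand-in data: two noisy views of a shared latent class (illustrative).
    y = torch.randint(0, C, (64,))
    x_a = torch.randn(64, 5) + y[:, None].float()
    x_b = torch.randn(64, 5) + y[:, None].float()
    loss = -mig_kl(x_a, x_b, prior)  # maximize the gain
    opt.zero_grad(); loss.backward(); opt.step()
```

The intuition behind the reduction: because X_A and X_B are independent conditioning on Y, the only agreement the predictors can sustain on matched pairs but not on shuffled ones is agreement about Y itself, so maximizing the gain drives both toward forecasts of the ground truth.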

Related articles

General Multimodal Elastic Registration Based on Mutual Information

Recent studies indicate that maximizing the mutual information of the joint histogram of two images is an accurate and robust way to rigidly register two mono- or multimodal images. Using mutual information for registration directly in a local manner is often not admissible owing to the weakened statistical power of the local histogram compared to a global one. We propose to use a global joint hi...
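For reference, the quantity this line of registration work maximizes, the mutual information of a joint intensity histogram, can be computed in a few lines. A generic NumPy sketch; the function name, bin count, and the assumption that the images are already aligned arrays are illustrative:

```python
# Generic sketch: mutual information of the joint histogram of two images,
# the objective maximized over transformations in MI-based registration.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    # Joint histogram of corresponding pixel intensities.
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()               # joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)    # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)    # marginal of image B
    nz = p_ab > 0                            # skip empty bins (log 0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())
```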

Sub-Modularity of Waterfilling with Applications to Online Basestation Allocation

We show that the popular water-filling algorithm for maximizing the mutual information in parallel Gaussian channels is sub-modular. The sub-modularity of the water-filling algorithm is then used to derive online basestation allocation algorithms, where mobile users are assigned to one of many possible basestations immediately and irrevocably upon arrival without knowing the future user information...
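For context, the water-filling allocation itself is standard: pour a total power budget across channels with noise levels n_i so that power_i = max(mu - n_i, 0), with the water level mu chosen to exhaust the budget. A minimal bisection sketch follows (the channel noise levels, budget, and tolerance are illustrative assumptions; the paper's sub-modularity result concerns how the achieved rate behaves as channels are added, which this sketch does not show):

```python
# Minimal sketch: classic water-filling power allocation for parallel
# Gaussian channels. power_i = max(mu - noise_i, 0), with the water
# level mu found by bisection so the allocations sum to the budget.
import numpy as np

def waterfill(noise, total_power, tol=1e-9):
    lo, hi = min(noise), max(noise) + total_power
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(mu - n, 0.0) for n in noise)
        lo, hi = (mu, hi) if used < total_power else (lo, mu)
    return [max(lo - n, 0.0) for n in noise]

alloc = waterfill(noise=[0.5, 1.0, 2.0], total_power=3.0)
# Achieved mutual information: sum_i 0.5 * log(1 + p_i / n_i)
rate = sum(0.5 * np.log(1 + p / n) for p, n in zip(alloc, [0.5, 1.0, 2.0]))
```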

On Classification of Bivariate Distributions Based on Mutual Information

Among all measures of independence between random variables, mutual information is the only one that is based on information theory. Mutual information takes into account all kinds of dependencies between variables, i.e., both the linear and non-linear dependencies. In this paper we have classified some well-known bivariate distributions into two classes of distributions based on their mutua...
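As a worked instance of mutual information as a dependence measure, for a bivariate normal pair the quantity reduces to a closed form in the correlation coefficient (a standard result, stated here for illustration):

```latex
% Mutual information of a bivariate normal (X, Y) with correlation rho:
% zero iff rho = 0 (independence), diverging as |rho| -> 1.
I(X;Y) \;=\; -\tfrac{1}{2}\,\log\!\left(1-\rho^{2}\right)
```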

Statistical mechanics of mutual information maximization

An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed (Becker S. and Hinton G., Nature, 355 (1992) 161). By exploiting a formal analogy to supervised learning in parity machines, the theory of zero-temperature Gibbs learning for the unsupervised procedure is presented...

An Information Maximization Approach to Overcomplete and Recurrent Representations

The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entro...
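The zero-noise reduction mentioned here is the standard infomax step: in the additive-noise setup, the conditional entropy term does not depend on the network, so maximizing mutual information reduces to maximizing the entropy of the outputs. Sketched under that assumed noise model:

```latex
% Infomax identity: with Y = f(X) + N and the noise entropy H(Y|X) = H(N)
% independent of the network parameters, maximizing I is maximizing H(Y).
I(X;Y) \;=\; H(Y) - H(Y \mid X) \;=\; H(Y) - H(N)
\quad\Longrightarrow\quad
\arg\max_{f} I(X;Y) \;=\; \arg\max_{f} H(Y)
```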


Journal:
  • CoRR

Volume: abs/1802.08887  Issue: -

Pages: -

Publication date: 2018